Pedestrian Detection





Revisiting Evaluation of Deep Neural Networks for Pedestrian Detection

Feifel, Patrick, Franke, Benedikt, Bonarens, Frank, Köster, Frank, Raulf, Arne, Schwenker, Friedhelm

arXiv.org Artificial Intelligence

The reliable DNN-based perception of pedestrians is a crucial step towards automated driving systems. The metrics currently applied in subset-based evaluation prohibit an application-oriented performance evaluation of DNNs for pedestrian detection. We argue that this limitation can be mitigated by the use of image segmentation. In this work, we leverage the instance and semantic segmentation of Cityscapes to describe a rule-based categorization of potential detection errors for CityPersons. Based on our systematic categorization, we introduce the filtered log-average miss rate as a new performance metric for pedestrian detection. Additionally, we derive and analyze a meaningful upper bound for the confidence threshold. We train and evaluate four backbones as part of a generic pedestrian detector and achieve state-of-the-art performance on CityPersons with a rather simple architecture. Our results and analysis show the benefits of the newly proposed performance metrics.
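The log-average miss rate that this abstract builds on can be illustrated with a short sketch. The filtering step of the proposed metric is not reproduced here; the sampling convention (nine FPPI reference points log-spaced in [1e-2, 1]) follows the common Caltech-style evaluation and is an assumption, not necessarily the authors' exact code.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, num_points=9):
    """Log-average miss rate (LAMR): sample the miss-rate curve at
    FPPI values spaced log-uniformly in [1e-2, 1e0] and average the
    samples in log space (geometric mean)."""
    refs = np.logspace(-2.0, 0.0, num_points)
    sampled = []
    for ref in refs:
        # take the miss rate at the largest FPPI not exceeding the
        # reference point; fall back to the first value otherwise
        idx = np.where(fppi <= ref)[0]
        mr = miss_rate[idx[-1]] if idx.size else miss_rate[0]
        sampled.append(max(mr, 1e-10))  # guard against log(0)
    return float(np.exp(np.mean(np.log(sampled))))
```

For example, a curve with miss rate 0.5 at FPPI 0.01, 0.3 at 0.1, and 0.1 at 1.0 yields a LAMR of roughly 0.33; a constant curve returns its constant value.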



A Comprehensive Review on Artificial Intelligence Empowered Solutions for Enhancing Pedestrian and Cyclist Safety

Zhang, Shucheng, Shi, Yan, Wang, Bingzhang, Zhang, Yuang, Karim, Muhammad Monjurul, Chen, Kehua, Liu, Chenxi, Nasri, Mehrdad, Wang, Yinhai

arXiv.org Artificial Intelligence

Ensuring the safety of vulnerable road users (VRUs), such as pedestrians and cyclists, remains a critical global challenge, as conventional infrastructure-based measures often prove inadequate in dynamic urban environments. Recent advances in artificial intelligence (AI), particularly in visual perception and reasoning, open new opportunities for proactive and context-aware VRU protection. However, existing surveys on AI applications for VRUs predominantly focus on detection, offering limited coverage of other vision-based tasks that are essential for comprehensive VRU understanding and protection. This paper presents a state-of-the-art review of recent progress in camera-based AI sensing systems for VRU safety, with an emphasis on developments from the past five years and emerging research trends. We systematically examine four core tasks, namely detection and classification, tracking and reidentification, trajectory prediction, and intent recognition and prediction, which together form the backbone of AI-empowered proactive solutions for VRU protection in intelligent transportation systems. To guide future research, we highlight four major open challenges from the perspectives of data, model, and deployment. By linking advances in visual AI with practical considerations for real-world implementation, this survey aims to provide a foundational reference for the development of next-generation sensing systems to enhance VRU safety.


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

The paper introduces an efficient locally decorrelating feature transform which, when combined with boosted (orthogonal) decision trees, considerably improves over the state of the art in pedestrian detection. Overall, it is a clearly (and nicely) written paper with good analysis, sufficient detail, and solid experiments.

Pros:
- Very well written and executed paper
- Attention to detail
- Solid results
- Straightforward and intuitive method

Cons:
- Incremental relative to Hariharan et al. (not major, see later)
- If it claims ``Improved Detection'', as opposed to ``Improved Pedestrian Detection'', then I would have liked to see more results on general object detection or similar tasks.

Going from global to local decorrelation, and doing the right analysis for the design decisions, sets it apart.
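The decorrelation the review refers to can be illustrated with generic ZCA whitening of feature vectors; this is a standard transform given as a minimal sketch, not the paper's learned per-patch filters.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten row vectors X of shape (n_samples, n_features):
    rotate into the eigenbasis of the covariance, rescale each
    direction to unit variance, and rotate back, so the whitened
    features are (approximately) decorrelated."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W
```

After whitening, the sample covariance of the output is close to the identity, which is what makes simple orthogonal decision-tree splits more effective on the transformed features.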


Automated Model Evaluation for Object Detection via Prediction Consistency and Reliability

Yoo, Seungju, Kwon, Hyuk, Hwang, Joong-Won, Lee, Kibok

arXiv.org Artificial Intelligence

Recent advances in computer vision have made training object detectors more efficient and effective; however, assessing their performance in real-world applications still relies on costly manual annotation. To address this limitation, we develop an automated model evaluation (AutoEval) framework for object detection. We propose Prediction Consistency and Reliability (PCR), which leverages the multiple candidate bounding boxes that conventional detectors generate before non-maximum suppression (NMS). PCR estimates detection performance without ground-truth labels by jointly measuring 1) the spatial consistency between boxes before and after NMS, and 2) the reliability of the retained boxes via the confidence scores of overlapping boxes. For a more realistic and scalable evaluation, we construct a meta-dataset by applying image corruptions of varying severity. Experimental results demonstrate that PCR yields more accurate performance estimates than existing AutoEval methods, and the proposed meta-dataset covers a wider range of detection performance. The code is available at https://github.com/YonseiML/autoeval-det.
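The two PCR ingredients, spatial overlap between candidate boxes and the confidence of overlapping boxes, can be sketched with plain IoU and greedy NMS. The `consistency_score` below is a hypothetical toy proxy in the spirit of the abstract, not the authors' actual formulation.

```python
import numpy as np

def iou(a, b):
    """IoU between one box a and an array of boxes b, format (x1, y1, x2, y2)."""
    x1 = np.maximum(a[0], b[:, 0]); y1 = np.maximum(a[1], b[:, 1])
    x2 = np.minimum(a[2], b[:, 2]); y2 = np.minimum(a[3], b[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < thr]
    return np.array(keep, dtype=int)

def consistency_score(boxes, scores, thr=0.5):
    """Toy label-free quality proxy: for each box kept by NMS, average
    the confidences of the other candidates that overlap it, so that
    dense, confident pre-NMS clusters score high."""
    keep = nms(boxes, scores, thr)
    vals = []
    for i in keep:
        mask = iou(boxes[i], boxes) >= thr
        mask[i] = False  # exclude the kept box itself
        vals.append(scores[mask].mean() if mask.any() else scores[i])
    return float(np.mean(vals))
```

A detection supported by several confident, well-aligned candidates before NMS contributes a high score, while an isolated low-confidence box drags the estimate down, mirroring the intuition behind label-free evaluation.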


Audio-Based Pedestrian Detection in the Presence of Vehicular Noise

Kim, Yonghyun, Han, Chaeyeon, Sarode, Akash, Posner, Noah, Guhathakurta, Subhrajit, Lerch, Alexander

arXiv.org Artificial Intelligence

Audio-based pedestrian detection is a challenging task that has, thus far, only been explored in noise-limited environments. We present a new dataset, results, and a detailed analysis of the state of the art in audio-based pedestrian detection in the presence of vehicular noise. Our study comprises three analyses: (i) a cross-dataset evaluation between noisy and noise-limited environments, (ii) an assessment of the impact of noisy data on model performance, highlighting the influence of acoustic context, and (iii) an evaluation of the models' predictive robustness on out-of-domain sounds. The new dataset is a comprehensive 1321-hour roadside collection of traffic-rich soundscapes; each recording includes 16 kHz audio synchronized with frame-level pedestrian annotations and 1 fps video thumbnails.